Using Quasirandom Numbers in Neural Networks

Authors

  • Peter G. Anderson
  • Roger S. Gaborski
  • Ming Ge
  • Sanjay Raghavendra
  • Mei-Ling Lung
Abstract

We present a novel training algorithm for a feed-forward neural network with a single hidden layer of nodes (i.e., two layers of connection weights). Our algorithm is capable of training networks for hard problems, such as the classic two-spirals problem. The weights in the first layer are determined using a quasirandom number generator. These weights are frozen: they are never modified during the training process. The second layer of weights is trained as a simple linear discriminator using methods such as the pseudoinverse, with possible iterations. We also study the problem of reducing the hidden layer: pruning low-weight nodes and a genetic algorithm search for good subsets.

Neural Network Structure

Our goal is to construct feed-forward neural networks, such as Fig. 1, which can solve low-dimensional two-class discrimination problems, such as the famous two-spirals problem [5]. Solving means that the network must learn the training set and generalize (interpolate and extrapolate) that learning in a reasonable way. The training data for the two-spirals problem, along with the generalization ability of one of our networks, is shown in Fig. 2. In most of our experiments, we use 200 points on each spiral, 400 training exemplars in all. The points lie in a 2a × 2a square in R² centered at the origin. We have experimented with various values of a in the range 1.0–4.0. A feed-forward network to classify those points has three inputs, x0, x1, and x2. The pair (x1, x2) represents the input data to be classified, and x0 ≡ 1 is a fixed bias input that allows us to use zero as the threshold for the cells in the next layer. The single hidden layer consists of N nodes, denoted y = (y1, y2, …, yN). The output is the scalar z.
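The training scheme above can be sketched in a few lines of NumPy. The abstract does not specify which quasirandom generator or hidden-layer transfer function was used, so this sketch substitutes an additive-recurrence (Weyl) low-discrepancy sequence and tanh hidden units; the function names and parameter values are illustrative, not the paper's. What is faithful to the description: the first-layer weights come from a quasirandom sequence and are never updated, and the output weights are found as a linear least-squares fit via the pseudoinverse.

```python
import numpy as np

def two_spirals(n=200, a=2.0):
    """n points per spiral, lying in a 2a x 2a square centered at the origin."""
    t = np.linspace(0.5, 4.0 * np.pi, n)
    r = a * t / (4.0 * np.pi)
    spiral = np.column_stack([r * np.cos(t), r * np.sin(t)])
    X = np.vstack([spiral, -spiral])              # second spiral = first rotated 180 degrees
    y = np.concatenate([np.ones(n), -np.ones(n)])  # class labels +1 / -1
    return X, y

def quasirandom_weights(n_hidden, dim):
    """Weights from an additive-recurrence (Weyl) sequence, frac(k * alpha)
    for irrational alpha, mapped to [-1, 1).  The particular sequence is an
    assumption; the paper only says 'a quasirandom number generator'."""
    alphas = np.sqrt(np.array([2.0, 3.0, 5.0, 7.0, 11.0, 13.0]))[:dim]
    k = np.arange(1, n_hidden + 1)[:, None]
    return 2.0 * np.mod(k * alphas, 1.0) - 1.0

def hidden_layer(X, W1):
    Xb = np.column_stack([np.ones(len(X)), X])    # x0 = 1 bias input
    H = np.tanh(Xb @ W1.T)                        # frozen nonlinear features
    return np.column_stack([np.ones(len(H)), H])  # bias for the output unit

def train(X, y, n_hidden=400):
    W1 = quasirandom_weights(n_hidden, dim=3)     # frozen, never updated
    w2 = np.linalg.pinv(hidden_layer(X, W1)) @ y  # pseudoinverse linear fit
    return W1, w2

def predict(X, W1, w2):
    return np.sign(hidden_layer(X, W1) @ w2)
```

With roughly as many hidden units as training exemplars, the pseudoinverse solution can interpolate the training set; the paper's iterative refinement, pruning, and genetic-algorithm subset search are omitted from this sketch.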

Similar articles

Accuracy comparison of Elman and Jordan artificial neural networks for air particulate matter concentration (PM10) prediction using MODIS satellite images, a case study of Ahvaz.

Due to the complexity of air pollution processes, artificial intelligence models, specifically neural networks, are utilized to simulate air pollution. So far, numerous artificial neural network models have been used to estimate the concentration of atmospheric PMs. These models have had different accuracies, which scholars constantly try to improve using numerous parameters. The current...


Solving Fuzzy Equations Using Neural Nets with a New Learning Algorithm

Artificial neural networks have advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper offers a novel method for finding a solution of a fuzzy equation that is assumed to have a real solution. To this end, we applied an architecture of fuzzy neural networks in which the connection weights are real numbers. The ...






Publication date: 1995